Appendix for Maximum Weighted Likelihood via Rival Penalized EM for Density Mixture Clustering with Automatic Model Selection

Author

  • Yiu-ming Cheung
Abstract

Supposing k is equal to the true mixture number, i.e., k = 3, we randomly located three seed points m_1, m_2, and m_3 in the input space as shown in Fig. 2a, where the data constitute three well-separated clusters. Moreover, we initialized each of the Σ_j's to be an identity matrix and all β_j's to be zero, i.e., we initialized α_1 = α_2 = α_3 = 1/3. Also, we set the two learning rates to 0.001 and 0.0001, respectively. We performed the learning of RPEM and plotted the Q value of (26) over the epochs in Fig. 2b. It can be seen that the Q value converged after 40 epochs. Fig. 2a shows the positions of the three converged seed points, which are all stably located at the corresponding cluster centers. A snapshot of the converged parameter values is: …
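The excerpt breaks off before the parameter snapshot, but the setup it describes (random seed points, identity covariances, β_j = 0 so that α_j = 1/3, two small learning rates, and the weighted likelihood Q tracked over epochs) can be illustrated with a minimal sketch. The Python below is a hypothetical illustration only, not the paper's exact adaptive RPEM of (26): the synthetic data set, the simplified rival-penalized weights (winner reinforced, rivals given a small negative weight), the covariances held fixed at the identity, and the assignment of the quoted rates 0.001 and 0.0001 to the means and to the β's are all assumptions made for this sketch.

```python
# Minimal sketch of the experiment described above (NOT the paper's exact
# RPEM update rules). Assumptions: 2-D data, covariances fixed at the
# identity, a simplified rival-penalized weight design, and the two quoted
# learning rates assigned to the means and to the betas, respectively.
import numpy as np

rng = np.random.default_rng(0)

# Three well-separated 2-D Gaussian clusters (synthetic stand-in for Fig. 2a).
centers = np.array([[0.0, 0.0], [5.0, 5.0], [5.0, -5.0]])
X = np.vstack([rng.normal(c, 1.0, size=(300, 2)) for c in centers])
rng.shuffle(X)

k, d = 3, 2
m = rng.uniform(X.min(0), X.max(0), size=(k, d))  # random seed points m_1..m_3
beta = np.zeros(k)                                # beta_j = 0  =>  alpha_j = 1/3
eta_m, eta_beta, eps = 1e-3, 1e-4, 0.1            # learning rates / rival penalty

def log_gauss(x, mu):
    """Log-density of N(x; mu, I) in d dimensions."""
    diff = x - mu
    return -0.5 * (d * np.log(2 * np.pi) + np.sum(diff * diff, axis=-1))

for epoch in range(60):
    Q = 0.0
    for x in X:
        alpha = np.exp(beta) / np.exp(beta).sum()   # softmax mixing weights
        logp = log_gauss(x, m)                      # log N(x; m_j, I) per component
        h = alpha * np.exp(logp - logp.max())
        h /= h.sum()                                # posteriors h(j | x)
        c = int(np.argmax(h))                       # winning component

        # Simplified rival-penalized weights: winner reinforced, rivals penalized.
        g = -eps * h
        g[c] = 1.0

        # Stochastic gradient ascent on the weighted log-likelihood.
        m += eta_m * g[:, None] * (x - m)           # d/dm_j: g_j (x - m_j)
        beta += eta_beta * (g - alpha * g.sum())    # d/dbeta_j of sum_l g_l ln(alpha_l)

        Q += float(np.sum(g * (np.log(alpha) + logp)))
    if epoch % 10 == 0:
        print(f"epoch {epoch:3d}  Q = {Q:10.2f}  "
              f"alpha = {np.round(np.exp(beta) / np.exp(beta).sum(), 3)}")

print("converged seed points:\n", np.round(m, 2))
```

With k equal to the true cluster number, all three seed points settle at the cluster centers and Q levels off after a few tens of epochs, mirroring the behaviour reported for Fig. 2; when k is set larger than the true number, the negative rival weights drive the extra components' mixing proportions toward zero, which is the automatic model selection effect the paper describes.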


Similar articles

A Batch Rival Penalized Expectation-Maximization Algorithm for Gaussian Mixture Clustering with Automatic Model Selection

Within the learning framework of maximum weighted likelihood (MWL) proposed by Cheung (2004, 2005), this paper will develop a batch Rival Penalized Expectation-Maximization (RPEM) algorithm for density mixture clustering provided that all observations are available before the learning process. Compared to the adaptive RPEM algorithm in Cheung (2004, 2005), this batch RPEM need not assign th...


Expectation-MiniMax: A General Penalized Competitive Learning Approach to Clustering Analysis

In the literature, the Rival Penalized Competitive Learning (RPCL) algorithm (Xu et al. 1993) and its variants perform clustering analysis well without knowing the cluster number. However, such a penalization scheme is heuristically proposed without any theoretical guidance. In this paper, we propose a general penalized competitive learning approach named Expectation-MiniMax (EMM) Learning that...


Unsupervised learning of regression mixture models with unknown number of components

Regression mixture models are widely studied in statistics, machine learning and data analysis. Fitting regression mixtures is challenging and is usually performed by maximum likelihood by using the expectation-maximization (EM) algorithm. However, it is well-known that the initialization is crucial for EM. If the initialization is inappropriately performed, the EM algorithm may lead to unsatis...


Semi-supervised learning via penalized mixture model with application to microarray sample classification

MOTIVATION It is biologically interesting to address whether human blood outgrowth endothelial cells (BOECs) belong to or are closer to large vessel endothelial cells (LVECs) or microvascular endothelial cells (MVECs) based on global expression profiling. An earlier analysis using a hierarchical clustering and a small set of genes suggested that BOECs seemed to be closer to MVECs. By taking adv...


Penalized Bregman Divergence Estimation via Coordinate Descent

Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. Recently, the coordinate descent (CD) algorithm was developed by Friedman et al. (2007) for penalized linear regression and penalized logistic regression and was shown to gain computational superiority. This paper explores...



Journal title:

Volume   Issue

Pages  -

Publication date: 2005